Mathematical modeling requires dimensional analysis.
Expand in a power series of a parameter or a variable, e.g., a binomial expansion or a Taylor series.
Binomial theorem
\[ \begin{align} (a+b)^{n} = \sum^{\infty}_{m=0} \frac{n(n-1)\cdots(n-m+1)}{m!} a^{n-m} b^{m} \end{align} \]
For a positive integer \(n\) the series terminates at \(m = n\), with coefficients \(n!/[m!(n-m)!]\); for any other real \(n\) it is an infinite series, valid provided that \(|b/a| < 1\).
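For example, for small \(\epsilon\),
\[ \begin{align} (1+\epsilon)^{1/2} = 1 + \frac{\epsilon}{2} - \frac{\epsilon^{2}}{8} + \frac{\epsilon^{3}}{16} - \cdots \end{align} \]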
Taylor expansions
\[ \begin{align} f(x) = \sum^{\infty}_{n=0} \frac{f^{(n)}(x_{0})}{n!} (x - x_{0})^{n} \end{align} \]
Frequently used series
\[ \begin{align} \sin x &= x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \cdots \\ \cos x &= 1 - \frac{x^{2}}{2!} + \frac{x^{4}}{4!} - \cdots \\ e^{x} &= 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \cdots \\ \ln(1+x) &= x - \frac{x^{2}}{2} + \frac{x^{3}}{3} - \cdots \quad (|x| < 1) \\ \end{align} \]
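As a quick numerical sanity check of such truncations, here is a minimal Python sketch (standard library only; the 8-term cutoff and the test point \(x = 0.3\) are arbitrary choices):

```python
import math

def sin_series(x, terms=8):
    """Partial sum of the Taylor series of sin(x) about 0."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

x = 0.3
# Both print ~0.29552020666, agreeing to machine precision.
print(sin_series(x), math.sin(x))
```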
We are interested in the behavior of \(f(\varepsilon)\) as \(\varepsilon \rightarrow 0\). Suppose the parameters are normalized so that \(\varepsilon \geq 0\). We classify \(f\) by the rate at which it tends to \(0\) or to infinity, comparing it with known functions. These comparison functions are called gauge functions. The simplest and most useful are the powers and inverse powers of \(\varepsilon\). However, \(e^{-1/\varepsilon}\) tends to \(0\) faster than any power of \(\varepsilon\). Therefore the powers of \(\varepsilon\) are not complete and must be supplemented by \(e^{-1/\varepsilon}\). Similarly, supplements include \(\ln(1/\varepsilon)\), etc.
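The claim about \(e^{-1/\varepsilon}\) follows by substituting \(t = 1/\varepsilon\):
\[ \begin{align} \lim_{\varepsilon \rightarrow 0^{+}} \frac{e^{-1/\varepsilon}}{\varepsilon^{n}} = \lim_{t \rightarrow \infty} \frac{t^{n}}{e^{t}} = 0 \quad \text{for every} \; n > 0 \end{align} \]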
\(\sin \epsilon\) tends to \(0\) at the same rate that \(\epsilon\) tends to \(0\).
\[ \begin{align} \sin \epsilon = O(\epsilon) \quad \text{as} \quad \epsilon \rightarrow 0 \end{align} \]
In general, we put
\[ \begin{align} f(\epsilon) = O \big ( g(\epsilon) \big ) \quad \text{as} \quad \epsilon \rightarrow 0 \end{align} \]
if
\[ \begin{align} \lim_{\epsilon \rightarrow 0} \frac{f(\epsilon)}{g(\epsilon)} = A, \quad 0 < |A| < \infty \end{align} \]
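For example, \(1 - \cos \epsilon = O(\epsilon^{2})\) as \(\epsilon \rightarrow 0\), since
\[ \begin{align} \lim_{\epsilon \rightarrow 0} \frac{1 - \cos \epsilon}{\epsilon^{2}} = \frac{1}{2} \end{align} \]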
Example: find the behavior, for large \(\omega\), of the integral
\[ \begin{align} f(\omega) = \int^{\infty}_{0} \frac{\omega e^{-x}}{\omega + x} dx \end{align} \]
Expand the factor \(\omega/(\omega + x)\) in inverse powers of \(\omega\):
\[ \begin{align} \frac{\omega}{\omega+x} = \frac{1}{1+\omega^{-1}x} = 1 - \frac{x}{\omega} + \frac{x^{2}}{\omega^{2}} - \frac{x^{3}}{\omega^{3}} + \cdots = \sum^{\infty}_{n=0} \frac{(-1)^{n} x^{n}}{\omega^{n}} \end{align} \]
which converges for \(x < \omega\).
Substitute the series into the original integral
\[ \begin{align} f(\omega) = \sum^{\infty}_{n=0} \Big [ \frac{(-1)^{n}}{\omega^{n}} \int^{\infty}_{0} x^{n} e^{-x} dx \Big ] \end{align} \]
Since
\[ \begin{align} \int^{\infty}_{0} x^{n} e^{-x} dx = n! \end{align} \]
for nonnegative integer \(n\), it follows that
\[ \begin{align} f(\omega) = \sum^{\infty}_{n=0} \frac{(-1)^{n} n!}{\omega^{n}} \end{align} \]
which diverges for all values of \(\omega\).
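The divergence follows from the ratio of successive terms:
\[ \begin{align} \Big | \frac{(-1)^{n+1} (n+1)! / \omega^{n+1}}{(-1)^{n} n! / \omega^{n}} \Big | = \frac{n+1}{\omega} \rightarrow \infty \quad \text{as} \quad n \rightarrow \infty \end{align} \]
for any fixed \(\omega\).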
To extract useful information from this divergent series, we truncate it. The partial sum
\[ \begin{align} \sum^{N}_{n=0} \frac{(-1)^{n} x^{n}}{\omega^{n}} \end{align} \]
is a geometric series (ratio \(-x/\omega\)) whose sum is
\[ \begin{align} \frac{1 - (-x/\omega)^{N+1}}{1+x/\omega} \end{align} \]
Hence we split \(\omega/(\omega+x)\) into the partial sum and a remainder
\[ \begin{align} \frac{\omega}{\omega + x} &= \sum^{N}_{n=0} \frac{(-1)^{n} x^{n}}{\omega^{n}} + \hat{R}_{N}(x, \omega) \\ \hat{R}_{N}(x, \omega) &= \frac{\omega}{\omega + x} - \sum^{N}_{n=0} \frac{(-1)^{n} x^{n}}{\omega^{n}} = \frac{(-x)^{N+1}}{\omega^{N}(\omega+x)} \\ \end{align} \]
In this form
\[ \begin{align} f(\omega) &= \sum^{N}_{n=0} \frac{(-1)^{n} n!}{\omega^{n}} + R_{N}(\omega) \\ R_{N}(\omega) &= \frac{(-1)^{N+1}}{\omega^{N}} \int^{\infty}_{0} \frac{x^{N+1} e^{-x}}{\omega+x} dx \\ \end{align} \]
This representation is useful only if \(R_{N}(\omega) \rightarrow 0\) in some limit.
One option is to fix \(\omega\) and let \(N \rightarrow \infty\); however, \(R_{N}(\omega) \rightarrow \infty\) as \(N \rightarrow \infty\), since the series diverges.
Another option is to fix \(N\) and let \(\omega \rightarrow \infty\). We note
\[ \begin{align} \frac{1}{\omega+x} &< \frac{1}{x} \\ |R_{N}(\omega)| &< \frac{1}{\omega^{N}} \int^{\infty}_{0} x^{N} e^{-x} dx = \frac{N!}{\omega^{N}} \\ \end{align} \]
Then \(R_{N}(\omega) \rightarrow 0\) as \(\omega \rightarrow \infty\) for fixed \(N\): although the full series diverges, the truncated sum approximates \(f(\omega)\) for large \(\omega\), with an error smaller than the magnitude of the last retained term.
In general, a series is said to be asymptotic, written
\[ \begin{align} f(\omega) \sim \sum^{\infty}_{n=0} \frac{a_{n}}{\omega^{n}} \quad \text{as} \quad \omega \rightarrow \infty \end{align} \]
if and only if
\[ \begin{align} f(\omega) = \sum^{N}_{n=0} \frac{a_{n}}{\omega^{n}} + O \Big ( \frac{1}{\omega^{N+1}} \Big ) \quad \text{as} \quad \omega \rightarrow \infty \quad \text{for every} \; N \end{align} \]
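In this notation, the result above reads
\[ \begin{align} \int^{\infty}_{0} \frac{\omega e^{-x}}{\omega + x} dx \sim \sum^{\infty}_{n=0} \frac{(-1)^{n} n!}{\omega^{n}} \quad \text{as} \quad \omega \rightarrow \infty \end{align} \]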
Since the terms eventually grow, there is always an optimum value of \(N\) for a given \(\omega\): the error bound \(N!/\omega^{N}\) decreases with \(N\) at first and then increases.
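A minimal numerical sketch of this optimal truncation (assuming SciPy is available for the reference quadrature; the choice \(\omega = 5\) is arbitrary, and the error should be smallest near \(N \approx \omega\)):

```python
import math
from scipy.integrate import quad

omega = 5.0  # arbitrary test value; larger omega pushes the optimum N higher

# Reference value of f(omega) by numerical quadrature.
f_exact, _ = quad(lambda x: omega * math.exp(-x) / (omega + x), 0, math.inf)

# Error of the truncated series sum_{n=0}^{N} (-1)^n n! / omega^n:
# it first decreases with N, then grows once n!/omega^n starts increasing.
partial = 0.0
for n in range(13):
    partial += (-1) ** n * math.factorial(n) / omega ** n
    print(f"N={n:2d}  |error| = {abs(partial - f_exact):.2e}")
```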
Since powers of \(\epsilon\) alone cannot represent every function asymptotically, we use a general sequence of functions \(\delta_{n}(\epsilon)\), with the requirement that each term be asymptotically smaller than its predecessor, i.e., \(\lim_{\epsilon \rightarrow 0} \delta_{n}(\epsilon)/\delta_{n-1}(\epsilon) = 0\), written
\[ \begin{align} \delta_{n}(\epsilon) = o \big ( \delta_{n-1}(\epsilon) \big ) \quad \text{as} \quad \epsilon \rightarrow 0 \end{align} \]
Such a sequence is called an asymptotic sequence.
In terms of asymptotic sequences, we define asymptotic expansions as follows:
\[ \begin{align} f(\epsilon) \sim \sum^{\infty}_{n=0} a_{n} \delta_{n}(\epsilon) \quad \text{as} \quad \epsilon \rightarrow 0 \end{align} \]
where the \(a_{n}\) are independent of \(\epsilon\) and \(\delta_{n}(\epsilon)\) is an asymptotic sequence.
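Examples of asymptotic sequences as \(\epsilon \rightarrow 0\) include
\[ \begin{align} \delta_{n}(\epsilon) = \epsilon^{n}, \qquad \delta_{n}(\epsilon) = \epsilon^{n/2}, \qquad \delta_{n}(\epsilon) = \Big ( \ln \frac{1}{\epsilon} \Big )^{-n} \end{align} \]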
The next step is to determine the coefficients \(a_{n}\).
Although a given function can be represented by an infinite number of asymptotic expansions (one for each choice of asymptotic sequence), once the asymptotic sequence is fixed the coefficients are unique and can be determined as follows.
\[ \begin{align} a_{n} = \lim_{\epsilon \rightarrow 0} \frac{f(\epsilon) - \sum^{n-1}_{m=0} a_{m} \delta_{m} (\epsilon)}{\delta_{n}(\epsilon)} \end{align} \]
because
\[ \begin{align} \lim_{\epsilon \rightarrow 0} \frac{\delta_{n}(\epsilon)}{\delta_{m}(\epsilon)} = 0 \quad \text{for} \quad n > m \end{align} \]
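For example, taking \(f(\epsilon) = \sin \epsilon\) with \(\delta_{n}(\epsilon) = \epsilon^{n}\) recovers the Taylor coefficients:
\[ \begin{align} a_{0} &= \lim_{\epsilon \rightarrow 0} \sin \epsilon = 0, \qquad a_{1} = \lim_{\epsilon \rightarrow 0} \frac{\sin \epsilon}{\epsilon} = 1, \\ a_{2} &= \lim_{\epsilon \rightarrow 0} \frac{\sin \epsilon - \epsilon}{\epsilon^{2}} = 0, \qquad a_{3} = \lim_{\epsilon \rightarrow 0} \frac{\sin \epsilon - \epsilon}{\epsilon^{3}} = -\frac{1}{6} \end{align} \]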
The main idea is that a convergent series may converge too slowly to be useful when the variable is not close to the expansion point, whereas a truncated asymptotic series often gives a much more accurate result with far fewer terms, because the error committed by truncating the series is of the order of the first neglected term.
The general procedure for employing asymptotic expansions is to assume the form of the expansions, substitute them into the governing equations, and then perform elementary operations to obtain the solutions. However, some operations may lead to singularities and nonuniformities.
Nonuniformity means that the error committed by truncating the series is no longer of the order of the first neglected term throughout the domain. We should always check whether the obtained expansions are uniform.
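A standard illustration of a nonuniform expansion (an application of the binomial series above, with \(x\) treated as fixed while \(\epsilon \rightarrow 0\)):
\[ \begin{align} \sqrt{x + \epsilon} = \sqrt{x} \Big ( 1 + \frac{\epsilon}{2x} - \frac{\epsilon^{2}}{8x^{2}} + \cdots \Big ) \end{align} \]
The correction term \(\epsilon/2x\) is small only if \(x \gg \epsilon\); the expansion breaks down when \(x = O(\epsilon)\), so it is not uniformly valid on an interval containing \(x = 0\).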